Creating a new pipeline using Seurat v4.0.2, available 2021.06.23
Important notes:
Calculating percent.mt, but NOT regressing on percent.mt; regressing on nCount_RNA and nFeature_RNA.
Load libraries required for Seurat v4
library(clustree)
Loading required package: ggraph
ScaleData

You can also normalize and scale data for the RNA assay. There are numerous resources on this, but Aaron Lun describes why the original log-normalized values should be used for DE and visualizations of expression quite well here: https://bioconductor.org/packages/3.10/workflows/vignettes/simpleSingleCell/inst/doc/batch.html#62_for_gene-based_analyses

> For gene-based procedures like differential expression (DE) analyses or gene network construction, it is desirable to use the original log-expression values or counts. The corrected values are only used to obtain cell-level results such as clusters or trajectories. Batch effects are handled explicitly using blocking terms or via a meta-analysis across batches. We do not use the corrected values directly in gene-based analyses, for various reasons:
>
> It is usually inappropriate to perform DE analyses on batch-corrected values, due to the failure to model the uncertainty of the correction. This usually results in loss of type I error control, i.e., more false positives than expected.
>
> The correction does not preserve the mean-variance relationship. Applications of common DE methods like edgeR or limma are unlikely to be valid.
>
> Batch correction may (correctly) remove biological differences between batches in the course of mapping all cells onto a common coordinate system. Returning to the uncorrected expression values provides an opportunity for detecting such differences if they are of interest. Conversely, if the batch correction made a mistake, the use of the uncorrected expression values provides an important sanity check.

In addition, the normalized values in SCT and integrated assays don’t necessarily correspond to per-gene expression values anyway, rather containing residuals (in the case of the scale.data slot for each).
Work out how to load the 4 cell populations into a single Seurat object
TODO: SET SEED! (UMAP and Louvain clustering are stochastic; set a seed for reproducibility.)
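A minimal base-R sketch of why this matters: stochastic steps (RunUMAP, FindClusters) give reproducible output only if the RNG seed is fixed first. The seed value 42 is arbitrary.

```r
# Fixing the seed makes a stochastic draw reproducible; the same idea
# applies before RunUMAP()/FindClusters() in the pipeline below.
set.seed(42)
a <- sample(1:100, 5)
set.seed(42)          # reset to the same seed...
b <- sample(1:100, 5) # ...and the draw repeats exactly
identical(a, b)
```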
projectName <- "msAggr_seurat"
jackstraw.dim <- 40
sessionInfo.filename <- paste0(projectName, "_sessionInfo.txt")
sink(sessionInfo.filename)
sessionInfo()
sink()
source("~/Desktop/10XGenomicsData/msAggr_scRNASeq/RFunctions/read_10XGenomics_data.R")
source("~/Desktop/10XGenomicsData/msAggr_scRNASeq/RFunctions/PercentVariance.R")
source("~/Desktop/10XGenomicsData/msAggr_scRNASeq/RFunctions/ColorPalette.R")
setwd("~/Desktop/10XGenomicsData/cellRanger/") # temporarily changing wd only works if you run the entire chunk at once
Warning: The working directory was changed to /Users/heustonef/Desktop/10XGenomicsData/cellRanger inside a notebook chunk. The working directory will be reset when the chunk is finished running. Use the knitr root.dir option in the setup chunk to change the working directory for notebook chunks.
data_file.list <- read_10XGenomics_data(sample.list = c("LSKm2", "CMPm2", "MEPm", "GMPm"))
data.object<-Read10X(data_file.list)
seurat.object <- CreateSeuratObject(counts = data.object, min.cells = 3, min.features = 200, project = projectName)
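As a rough illustration of what `min.cells` and `min.features` do (a toy sketch, not Seurat's internal code): genes must be detected in at least min.cells cells, and cells must express at least min.features genes. Thresholds here are shrunk to fit the toy matrix.

```r
# Toy gene-by-cell counts; thresholds scaled down from the real call
# (min.cells = 3, min.features = 200).
set.seed(0)
counts <- matrix(rbinom(30, size = 1, prob = 0.5), nrow = 6)  # 6 genes x 5 cells
keep.genes <- rowSums(counts > 0) >= 2  # gene detected in >= 2 cells
keep.cells <- colSums(counts > 0) >= 2  # cell expresses >= 2 genes
filtered <- counts[keep.genes, keep.cells, drop = FALSE]
```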
Clean up to free memory
remove(data.object)
Add mitochondrial metadata and plot some basic features
seurat.object[["percent.mt"]] <- PercentageFeatureSet(seurat.object, pattern = "^mt-")
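A sketch of what `PercentageFeatureSet` computes (per the Seurat documentation): for each cell, the percentage of total counts coming from features whose names match the pattern.

```r
# Two toy cells; the "mt-"-prefixed row stands in for mitochondrial genes.
counts <- matrix(c(90, 10, 50, 50), nrow = 2,
                 dimnames = list(c("Actb", "mt-Nd1"), c("cell1", "cell2")))
mito <- grepl("^mt-", rownames(counts))
percent.mt <- colSums(counts[mito, , drop = FALSE]) / colSums(counts) * 100
percent.mt  # cell1: 10, cell2: 50
```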
VlnPlot(seurat.object, features = c("nFeature_RNA", "nCount_RNA", "percent.mt"), ncol = 3, pt.size = 0, fill.by = 'orig.ident')
plot1 <- FeatureScatter(seurat.object, feature1 = "nCount_RNA", feature2 = "percent.mt", group.by = "orig.ident", pt.size = 0.01)
plot2 <- FeatureScatter(seurat.object, feature1 = "nCount_RNA", feature2 = "nFeature_RNA", group.by = "orig.ident", pt.size = 0.01)
plot1 + plot2
Remove low-quality cells:
- require: nFeature_RNA between 200 and 4000 (inclusive)
- require: percent.mt <= 5
print(paste("original object:", nrow(seurat.object@meta.data), "cells", sep = " "))
[1] "original object: 41950 cells"
seurat.object <- subset(seurat.object,
subset = nFeature_RNA >=200 &
nFeature_RNA <= 4000 &
percent.mt <= 5
)
print(paste("new object:", nrow(seurat.object@meta.data), "cells", sep = " "))
[1] "new object: 37104 cells"
Struggling to wrap my head around this one. It seems that SCTransform is best for batch correction, while NormalizeData and ScaleData are best for DGE. Several vignettes have performed both.
`selection.method`: how to choose top variable features. Choose one of:

- `vst`: First, fits a line to the relationship of log(variance) and log(mean) using local polynomial regression (loess). Then standardizes the feature values using the observed mean and expected variance (given by the fitted line). Feature variance is then calculated on the standardized values after clipping to a maximum (see the clip.max parameter).
- `mean.var.plot` (`mvp`): First, uses a function to calculate average expression (mean.function) and dispersion (dispersion.function) for each feature. Next, divides features into num.bin (default 20) bins based on their average expression, and calculates z-scores for dispersion within each bin. The purpose of this is to identify variable features while controlling for the strong relationship between variability and average expression.
- `dispersion` (`disp`): selects the genes with the highest dispersion values
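A rough base-R sketch of the `vst` idea above (not Seurat's exact implementation, which also clips standardized values): fit log10(variance) against log10(mean) with loess, then rank genes by how far their observed variance sits above the fitted expectation.

```r
set.seed(1)
counts <- matrix(rpois(2000, lambda = 5), nrow = 100)  # 100 toy genes x 20 cells
gene.mean <- rowMeans(counts)
gene.var  <- apply(counts, 1, var)
keep <- gene.var > 0                                   # loess needs finite logs
fit <- loess(log10(gene.var[keep]) ~ log10(gene.mean[keep]), span = 0.5)
expected.var <- 10^fitted(fit)
standardized <- gene.var[keep] / expected.var          # > 1: more variable than the trend
top.genes <- order(standardized, decreasing = TRUE)[1:10]
```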
seurat.object <- NormalizeData(seurat.object, normalization.method = "LogNormalize", scale.factor = 10000)
Performing log-normalization
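For reference, what `LogNormalize` computes (matching the Seurat documentation) can be sketched on a toy matrix: each cell's counts are divided by that cell's total, multiplied by the scale factor, then natural-log transformed with log1p.

```r
counts <- matrix(c(10, 0, 5,    # cell 1 (total 15)
                   2, 8, 0),    # cell 2 (total 10)
                 nrow = 3)
scale.factor <- 10000
normalized <- log1p(t(t(counts) / colSums(counts)) * scale.factor)
normalized[1, 1] == log1p(10 / 15 * 10000)
```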
Find variable features
seurat.object <- FindVariableFeatures(seurat.object, selection.method = "vst", nfeatures = 2000)
Calculating gene variances
Calculating feature variances of standardized and clipped values
top10 <- head(VariableFeatures(seurat.object), 10)
plot1 <- VariableFeaturePlot(seurat.object)
plot2 <- LabelPoints(plot = plot1, points = top10, repel = TRUE)
When using repel, set xnudge and ynudge to 0 for optimal results
plot1 + plot2
Warning: Transformation introduced infinite values in continuous x-axis
Warning: Removed 2 rows containing missing values (geom_point).
Warning: Transformation introduced infinite values in continuous x-axis
Warning: Removed 2 rows containing missing values (geom_point).
Scale data (linear transformation)
all.genes <- rownames(seurat.object)
seurat.object <- ScaleData(seurat.object, features = all.genes, vars.to.regress = c("nCount_RNA", "nFeature_RNA"))
Regressing out nCount_RNA, nFeature_RNA
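A toy sketch of the regress-then-scale idea (Seurat's implementation differs in detail): for each gene, fit the technical covariates with a linear model, keep the residuals, then z-score them.

```r
# One toy gene across four cells; nCount_RNA is the covariate being
# regressed out, as in vars.to.regress above.
nCount_RNA <- c(1000, 2000, 3000, 4000)
expr <- c(1.0, 1.8, 3.1, 3.9)
resid.expr <- residuals(lm(expr ~ nCount_RNA))
scaled <- (resid.expr - mean(resid.expr)) / sd(resid.expr)  # mean 0, sd 1
```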
# save.image(file = paste0(projectName, '.RData'))
saveRDS(seurat.object, file = paste0(projectName, "_raw.RDS"))
Linear dimensional reduction. Defaults are based on VariableFeatures, but this can be changed.
seurat.object <- RunPCA(seurat.object, features = VariableFeatures(object = seurat.object))
PC_ 1
Positive: Car2, Car1, Blvrb, Klf1, Mt2, Vamp5, Ermap, Aqp1, C1qtnf12, Rhd
Sphk1, Ces2g, Tspo2, Cldn13, Gm15915, Slc38a5, Gstm5, Smim1, Abcb4, Gclm
Ybx3, Nxpe2, Ctse, Cenpv, Stard10, Mns1, Gata1, Slc25a21, Sdsl, Epor
Negative: Tmsb4x, Prtn3, Mpo, Ctsg, Tyrobp, Plac8, Elane, Clec12a, Ly6c2, Slpi
Sh3bgrl3, Anxa3, Pkm, Hp, Emb, Fcer1g, Ms4a3, Pgam1, Ms4a6c, Irf8
Coro1a, Serpinb1a, BC035044, Lgals1, H2afy, Arhgdib, Igsf6, Alas1, Ly86, Cst3
PC_ 2
Positive: Ctla2a, Malat1, Ifitm1, Hlf, Gimap6, Jund, Fos, Tsc22d1, Ltb, Gimap1
Tmem176b, Pim1, Adgrl4, Ifitm3, Sox4, Zfp36, Adgrg1, Cd34, Gcnt2, Rgs1
Klf2, Gm5111, Cd27, Ypel3, Klf6, Gm19590, Dusp1, Dusp2, Ifi203, Shisa5
Negative: H2afz, Ybx1, Ppia, Atp5g1, Ran, Mt1, Ranbp1, Cycs, Nme1, Hmgb2
Cks2, Slc25a5, Cox5a, Cks1b, Ap3s1, Atpif1, Dut, Eif5a, Ptma, Sdf2l1
Nhp2, Dynll1, Elane, Mrpl18, Atp5o, Ms4a3, Tmem14c, Chchd2, Ly6c2, Birc5
PC_ 3
Positive: Ube2c, Nusap1, Plbd1, Cenpf, Birc5, Lgals3, Aif1, Mki67, Id2, Hmmr
Top2a, Prc1, Ifi205, Cdca8, Kif23, Batf3, H2afx, Cenpa, Tpx2, Ccnb1
Ccnb2, Kif22, Cdc20, Plk1, Cenpe, Hmgb2, Pimreg, Racgap1, H2-Aa, Cdca3
Negative: Cd63, Srgn, Mif, Ung, Prtn3, Rgcc, Ms4a3, Srm, Gstm1, Elane
Nkg7, Hspd1, Cebpe, Alas1, Fkbp11, Hsp90ab1, Mpo, Ctsg, Prdx6, Trem3
Calr, Gsr, Serpinb1a, Rps2, Nme1, Prss57, Anxa3, Cst7, Fcgr3, Rack1
PC_ 4
Positive: Srgn, Nkg7, Itga2b, Cd9, Pf4, Ube2c, Lockd, Apoe, Cavin2, Nusap1
Cenpf, H1fx, Cks2, Serpine2, Cd63, Gata2, Hmmr, Pimreg, Cdc20, Tuba8
Ckap2l, Pbx1, Prc1, Cdca8, Ccnb2, Pdcd4, Rab27b, Tpx2, Malat1, Rgs18
Negative: Aif1, Ctss, Irf8, Cd74, Lsp1, Plbd1, Lgals3, H2-Aa, Cd52, Id2
Ifi205, H2-Eb1, Ccr2, Pld4, Batf3, Psap, Ighm, Mpeg1, H2-Ab1, Ckb
Ly86, Itgb7, Ms4a4c, Ifi30, Ctsh, Fth1, Naaa, Ms4a6c, Tmsb10, Jaml
PC_ 5
Positive: Ftl1, S100a8, Pglyrp1, Clec4a2, Rgcc, Gstm1, S100a6, Trem3, Gda, Wfdc21
Lgals3, Dstn, Slfn2, Prdx5, Hp, H2-Eb1, Ly6c2, H2-Aa, Cd74, Cebpe
Gng11, Cd63, Mcemp1, Fcgr3, Ms4a3, Mmrn1, Mt1, Hacd4, Pdzk1ip1, Selenom
Negative: Stmn1, H2afy, Ptma, Plac8, Hsp90ab1, Tmsb10, Rps2, Cd34, Bcl2, Hspa8
Sox4, Npm1, Hist1h2ap, Cpa3, Cd48, Tespa1, Satb1, Ccl9, Fos, Ppia
Hist1h2ae, Hist1h2ac, Hmgb1, Lat2, Fabp5, BC035044, Jun, Dntt, Adgrg3, Egr1
Plot results
VizDimLoadings(seurat.object, dims = 1:6, nfeatures = 10, reduction = "pca", ncol = 2)
DimPlot colored by orig.ident
DimPlot(seurat.object, reduction = "pca", group.by = "orig.ident")
Let’s put in a concerted effort to pick the right dimensionality using the newest software
```r
# jackstraw.dim <- 40
# seurat.object <- JackStraw(seurat.object, num.replicate = 100, dims = jackstraw.dim) # runs ~50 min
# seurat.object <- ScoreJackStraw(seurat.object, dims = 1:jackstraw.dim)
# save.image(paste0(projectName, ".RData"))
```

Draw dim.reduction plots

```r
# JackStrawPlot(seurat.object, dims = 25:36)
```

```r
ElbowPlot(seurat.object, ndims = 50)
percent.variance(seurat.object@reductions$pca@stdev)
```

Number of PCs describing X% of variance
Number of PCs describing X% of variance (transcribed from the cell above because I don’t know how to freeze results):
- Num pcs for 80% variance: 12
- Num pcs for 85% variance: 18
- Num pcs for 90% variance: 26
- Num pcs for 95% variance: 37
set total.var <- 80%
tot.var <- percent.variance(seurat.object@reductions$pca@stdev, plot.var = FALSE, return.val = TRUE)
paste0("Num pcs for 80% variance: ", length(which(cumsum(tot.var) <= 80)))
[1] "Num pcs for 80% variance: 12"
paste0("Num pcs for 85% variance: ", length(which(cumsum(tot.var) <= 85)))
[1] "Num pcs for 85% variance: 18"
paste0("Num pcs for 90% variance: ", length(which(cumsum(tot.var) <= 90)))
[1] "Num pcs for 90% variance: 26"
paste0("Num pcs for 95% variance: ", length(which(cumsum(tot.var) <= 95)))
[1] "Num pcs for 95% variance: 37"
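The selection logic above, on a toy stdev vector (percent.variance is assumed to return stdev^2 / sum(stdev^2) * 100 per PC, consistent with the cumsum thresholds used here):

```r
stdev <- c(8, 6, 4, 3, 2, 1)                 # toy per-PC standard deviations
pct.var <- stdev^2 / sum(stdev^2) * 100      # percent variance per PC
n.pcs.80 <- length(which(cumsum(pct.var) <= 80))
n.pcs.80  # 2 toy PCs fall under the 80% threshold
```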
Plot UMAP
for(x in c(0.5, 1, 1.5, 2, 2.5)){
seurat.object <- FindClusters(seurat.object, resolution = x)
}
for (meta.col in colnames(seurat.object@meta.data)){
if(grepl(pattern = ("RNA_snn_res"), x = meta.col)==TRUE){
myplot <- DimPlot(seurat.object,
group.by = meta.col,
reduction = "umap",
cols = color.palette
) +
ggtitle(paste0(projectName, " dim", ndims, "res", gsub("RNA_snn_res", "", meta.col) ))
plot(myplot)
}
}
saveRDS(seurat.object, file = paste0(projectName, "_dim", ndims, ".RDS"))
Generate statistics for each cluster/resolution combo
current_res <- 'RNA_snn_res.0.5'
cluster_ids <- sort(unique(seurat.object@meta.data[,current_res]))
counts_df <- data.frame(matrix(nrow = length(cluster_ids), ncol = 4))
rownames(counts_df) <- cluster_ids
colnames(counts_df) <- c("LSKm2", "CMPm2", "MEPm", "GMPm")
for(id in cluster_ids){
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "LSKm2"),])
counts_df[id, "LSKm2"] = cell_value
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "CMPm2"),])
counts_df[id, "CMPm2"] = cell_value
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "MEPm"),])
counts_df[id, "MEPm"] = cell_value
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "GMPm"),])
counts_df[id, "GMPm"] = cell_value
}
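For what it's worth, the per-cluster, per-sample counting in the loop above can be collapsed into one `table()` call (a sketch using a toy stand-in for seurat.object@meta.data):

```r
meta <- data.frame(
  RNA_snn_res.0.5 = c("0", "0", "1", "1", "1", "2"),
  orig.ident      = c("LSKm2", "CMPm2", "LSKm2", "MEPm", "GMPm", "GMPm")
)
counts_df <- as.data.frame.matrix(table(meta$RNA_snn_res.0.5, meta$orig.ident))
counts_df  # one row per cluster, one column per sample
```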
clustree(seurat.object, prefix = "RNA_snn_res.", node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'data', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'scale.data', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'counts', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
set total.var <- 85%
tot.var <- percent.variance(seurat.object@reductions$pca@stdev, plot.var = FALSE, return.val = TRUE)
ndims <- length(which(cumsum(tot.var) <= 85))
seurat.object <- FindNeighbors(seurat.object, dims = 1:ndims)
seurat.object <- FindClusters(seurat.object, resolution = 0.5)
seurat.object <- RunUMAP(seurat.object, dims = 1:ndims)
Plot UMAP
tot.var <- percent.variance(seurat.object@reductions$pca@stdev, plot.var = FALSE, return.val = TRUE)
ndims <- length(which(cumsum(tot.var) <= 85))
seurat.object <- FindNeighbors(seurat.object, dims = 1:ndims)
Computing nearest neighbor graph
Computing SNN
seurat.object <- FindClusters(seurat.object, resolution = 0.5)
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1210978
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8975
Number of communities: 15
Elapsed time: 7 seconds
seurat.object <- RunUMAP(seurat.object, dims = 1:ndims)
10:28:27 UMAP embedding parameters a = 0.9922 b = 1.112
10:28:27 Read 37104 rows and found 18 numeric columns
10:28:27 Using Annoy for neighbor search, n_neighbors = 30
10:28:27 Building Annoy index with metric = cosine, n_trees = 50
10:28:30 Writing NN index file to temp file /var/folders/4f/fwrj6fnn1dn4g8wsf0zv563hjsvl24/T//RtmpMfek2E/file6cd4e41ca37
10:28:30 Searching Annoy index using 1 thread, search_k = 3000
10:28:38 Annoy recall = 100%
10:28:39 Commencing smooth kNN distance calibration using 1 thread
10:28:40 Initializing from normalized Laplacian + noise
10:28:42 Commencing optimization for 200 epochs, with 1505258 positive edges
10:28:58 Optimization finished
for(x in c(0.5, 1, 1.5, 2, 2.5)){
seurat.object <- FindClusters(seurat.object, resolution = x)
}
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1210978
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8975
Number of communities: 15
Elapsed time: 7 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1210978
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8630
Number of communities: 23
Elapsed time: 6 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1210978
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8382
Number of communities: 31
Elapsed time: 7 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1210978
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8202
Number of communities: 36
Elapsed time: 6 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1210978
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8042
Number of communities: 45
Elapsed time: 6 seconds
for (meta.col in colnames(seurat.object@meta.data)){
if(grepl(pattern = ("RNA_snn_res"), x = meta.col)==TRUE){
myplot <- DimPlot(seurat.object,
group.by = meta.col,
reduction = "umap",
cols = color.palette
) +
ggtitle(paste0(projectName, " dim", ndims, "res", gsub("RNA_snn_res", "", meta.col) ))
plot(myplot)
}
}
Generate statistics for each cluster/resolution combo
saveRDS(seurat.object, file = paste0(projectName, "_dim", ndims, ".RDS"))
Must ensure we have the right cluster stability, that is, cells that start in the same cluster tend to stay in the same cluster. If your data is over-clustered, cells will bounce between groups.
Following [this tutorial by Matt O.](https://towardsdatascience.com/10-tips-for-choosing-the-optimal-number-of-clusters-277e93d72d92). Previously my favourite has been clustree, which gives a nice visual. NB: for some reason `clustree::clustree()` didn’t work, whereas `library(clustree)` followed by `clustree()` did.
current_res <- 'RNA_snn_res.0.5'
cluster_ids <- sort(unique(seurat.object@meta.data[,current_res]))
counts_df <- data.frame(matrix(nrow = length(cluster_ids), ncol = 4))
rownames(counts_df) <- cluster_ids
colnames(counts_df) <- c("LSKm2", "CMPm2", "MEPm", "GMPm")
for(id in cluster_ids){
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "LSKm2"),])
counts_df[id, "LSKm2"] = cell_value
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "CMPm2"),])
counts_df[id, "CMPm2"] = cell_value
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "MEPm"),])
counts_df[id, "MEPm"] = cell_value
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "GMPm"),])
counts_df[id, "GMPm"] = cell_value
}
clustree(seurat.object, prefix = "RNA_snn_res.", node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
Warning: The `add` argument of `group_by()` is deprecated as of dplyr 1.0.0.
Please use the `.add` argument instead.
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'data', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'scale.data', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'counts', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
Plot UMAP
tot.var <- percent.variance(seurat.object@reductions$pca@stdev, plot.var = FALSE, return.val = TRUE)
ndims <- length(which(cumsum(tot.var) <= 90))
seurat.object <- FindNeighbors(seurat.object, dims = 1:ndims)
Computing nearest neighbor graph
Computing SNN
seurat.object <- FindClusters(seurat.object, resolution = 0.5)
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8962
Number of communities: 13
Elapsed time: 8 seconds
seurat.object <- RunUMAP(seurat.object, dims = 1:ndims)
09:54:54 UMAP embedding parameters a = 0.9922 b = 1.112
09:54:54 Read 37104 rows and found 26 numeric columns
09:54:54 Using Annoy for neighbor search, n_neighbors = 30
09:54:54 Building Annoy index with metric = cosine, n_trees = 50
09:54:57 Writing NN index file to temp file /var/folders/4f/fwrj6fnn1dn4g8wsf0zv563hjsvl24/T//RtmpMfek2E/file6cd429d4fb06
09:54:57 Searching Annoy index using 1 thread, search_k = 3000
09:55:05 Annoy recall = 100%
09:55:06 Commencing smooth kNN distance calibration using 1 thread
09:55:07 Initializing from normalized Laplacian + noise
09:55:09 Commencing optimization for 200 epochs, with 1544862 positive edges
09:55:25 Optimization finished
for(x in c(0.5, 1, 1.5, 2, 2.5)){
seurat.object <- FindClusters(seurat.object, resolution = x)
}
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8962
Number of communities: 13
Elapsed time: 8 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8602
Number of communities: 23
Elapsed time: 6 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8355
Number of communities: 30
Elapsed time: 6 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8156
Number of communities: 36
Elapsed time: 7 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8006
Number of communities: 42
Elapsed time: 7 seconds
for (meta.col in colnames(seurat.object@meta.data)){
if(grepl(pattern = ("RNA_snn_res"), x = meta.col)==TRUE){
myplot <- DimPlot(seurat.object,
group.by = meta.col,
reduction = "umap",
cols = color.palette
) +
ggtitle(paste0(projectName, " dim", ndims, "res", gsub("RNA_snn_res", "", meta.col) ))
plot(myplot)
}
}
Generate statistics for each cluster/resolution combo
saveRDS(seurat.object, file = paste0(projectName, "_dim", ndims, ".RDS"))
current_res <- 'RNA_snn_res.0.5'
cluster_ids <- sort(unique(seurat.object@meta.data[,current_res]))
counts_df <- data.frame(matrix(nrow = length(cluster_ids), ncol = 4))
rownames(counts_df) <- cluster_ids
colnames(counts_df) <- c("LSKm2", "CMPm2", "MEPm", "GMPm")
for(id in cluster_ids){
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "LSKm2"),])
counts_df[id, "LSKm2"] = cell_value
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "CMPm2"),])
counts_df[id, "CMPm2"] = cell_value
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "MEPm"),])
counts_df[id, "MEPm"] = cell_value
cell_value <- nrow(seurat.object@meta.data[(seurat.object@meta.data[current_res] == id) &
(seurat.object@meta.data$orig.ident == "GMPm"),])
counts_df[id, "GMPm"] = cell_value
}
clustree(seurat.object, prefix = "RNA_snn_res.", node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
png(filename = paste0(projectName, "_dim", ndims, "-clustree.png"), height = 800, width = 1600)
clustree(seurat.object, prefix = "RNA_snn_res.", node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
dev.off()
quartz_off_screen
2
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'data', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'scale.data', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'counts', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
Plot UMAP
tot.var <- percent.variance(seurat.object@reductions$pca@stdev, plot.var = FALSE, return.val = TRUE)
ndims <- length(which(cumsum(tot.var) <= 95))
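`percent.variance()` is a helper defined earlier in this notebook; assuming it returns the percent of total variance explained per PC, the 95% cutoff above amounts to the base-R sketch below. The `stdev` vector is made up for illustration, standing in for `seurat.object@reductions$pca@stdev`:

```r
# Made-up PCA standard deviations (the real values come from seurat.object@reductions$pca@stdev)
stdev <- c(8, 6, 4, 3, 2, 1.5, 1.2, 1, 0.9, 0.8)

# Percent of total variance explained by each PC (variance = stdev^2)
tot.var <- stdev^2 / sum(stdev^2) * 100

# Number of PCs whose cumulative explained variance stays at or below 95%
ndims <- length(which(cumsum(tot.var) <= 95))
```

With these toy values the first four PCs stay under the 95% line, so `ndims` is 4; the real pipeline simply runs the same arithmetic on the fitted PCA.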
seurat.object <- FindNeighbors(seurat.object, dims = 1:ndims)
Computing nearest neighbor graph
Computing SNN
seurat.object <- FindClusters(seurat.object, resolution = 0.5)
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1328463
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8955
Number of communities: 13
Elapsed time: 7 seconds
seurat.object <- RunUMAP(seurat.object, dims = 1:ndims)
10:02:17 UMAP embedding parameters a = 0.9922 b = 1.112
10:02:17 Read 37104 rows and found 37 numeric columns
10:02:17 Using Annoy for neighbor search, n_neighbors = 30
10:02:17 Building Annoy index with metric = cosine, n_trees = 50
10:02:21 Writing NN index file to temp file /var/folders/4f/fwrj6fnn1dn4g8wsf0zv563hjsvl24/T//RtmpMfek2E/file6cd45dac5c5e
10:02:21 Searching Annoy index using 1 thread, search_k = 3000
10:02:29 Annoy recall = 100%
10:02:30 Commencing smooth kNN distance calibration using 1 thread
10:02:31 Initializing from normalized Laplacian + noise
10:02:32 Commencing optimization for 200 epochs, with 1590084 positive edges
10:02:49 Optimization finished
for(x in c(0.5, 1, 1.5, 2, 2.5)){
seurat.object <- FindClusters(seurat.object, resolution = x)
}
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8962
Number of communities: 13
Elapsed time: 8 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8602
Number of communities: 23
Elapsed time: 6 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8355
Number of communities: 30
Elapsed time: 6 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8156
Number of communities: 36
Elapsed time: 7 seconds
Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck
Number of nodes: 37104
Number of edges: 1255141
Running Louvain algorithm...
Maximum modularity in 10 random starts: 0.8006
Number of communities: 42
Elapsed time: 7 seconds
for (meta.col in colnames(seurat.object@meta.data)){
if(grepl(pattern = "RNA_snn_res", x = meta.col)){
myplot <- DimPlot(seurat.object,
group.by = meta.col,
reduction = "umap",
cols = color.palette
) +
ggtitle(paste0(projectName, " dim", ndims, "res", gsub("RNA_snn_res", "", meta.col) ))
plot(myplot)
}
}
Generate statistics for each cluster/resolution combo
saveRDS(seurat.object, file = paste0(projectName, "_dim", ndims, ".RDS"))
current_res <- 'RNA_snn_res.0.5'
cluster_ids <- sort(unique(seurat.object@meta.data[,current_res]))
clustree(seurat.object, prefix = "RNA_snn_res.", node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'data', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'scale.data', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
I think the clusters from 90% variance at 0.5 and 1.0 resolution are the most stable. We'll run some statistics and DGE on those. We will also need to go back and experiment with SCTransform, since these are multiple cell types from multiple lanes.
clustree(seurat.object, prefix = "RNA_snn_res.", exprs = 'counts', node_colour = "sc3_stability") +
scale_color_continuous(low = 'red3', high = 'white')
Cluster stability could be influenced by:

* the number of cells in each population (Cell Ranger v6 calls more cells than Cell Ranger v1, especially in MEP)
* incorrect dimensionality
* ScaleData did not account for regression factors (e.g., "nCount_RNA" or "nFeature_RNA")
* cell cycle was not considered
* an incorrect normalization/scaling method
* clustering that is too strict or not strict enough
* wrong parameters in the neighborhood analysis
* a missing mitoC filter (there's a chunk of MEP with mitoC at ~40%)
* SCTransform accounting better for sources of variability
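On the mitoC point above: before picking a threshold, it helps to know how many cells a cutoff would discard. A minimal base-R sketch on made-up `percent.mt` values (in the real pipeline this column is `seurat.object@meta.data$percent.mt`):

```r
# Made-up percent.mt values; note the high-mitoC lump around 40%
percent.mt <- c(2, 3, 5, 8, 38, 41, 44, 4, 6, 39)

# Fraction of cells a "percent.mt < 20" filter would remove
cutoff <- 20
frac_removed <- mean(percent.mt >= cutoff)
```

On the real object, the filter itself would then be applied with something like `subset(seurat.object, subset = percent.mt < 20)`; the 20% cutoff here is illustrative, not a recommendation.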
# Number of filtered cells left in each pop
sapply(c("LSKm2", "CMPm2", "MEPm", "GMPm"), function(x) (c(nrow(seurat.object@meta.data[seurat.object@meta.data$orig.ident == x,]))))
Looks like MEPm is the only sample with that huge MitoC % lump @ 40%. What do these cells look like, otherwise?
```r
# Per-sample percent.mt histograms, scaled to percent of cells in each sample
for (x in c("LSKm2", "CMPm2", "MEPm", "GMPm")){
  h <- hist(seurat.object@meta.data[seurat.object@meta.data$orig.ident == x, 'percent.mt'], breaks = 30, plot = FALSE)
  h$density <- h$counts / sum(h$counts) * 100
  plot(h, freq = FALSE, main = paste(x, "percent mitoC"), xlab = "percent mitoC", ylab = "Percent of cells")
}
```
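One way to answer "what do these cells look like, otherwise" is to compare library size and detected genes between the high- and low-mitoC fractions. Below is a minimal base-R sketch on made-up metadata; the column names follow Seurat's defaults, but the values are invented:

```r
# Made-up metadata for a handful of MEPm cells (Seurat default column names)
meta <- data.frame(
  nCount_RNA   = c(9000, 8500, 1200, 1100, 9500, 1300),
  nFeature_RNA = c(3000, 2800,  600,  550, 3100,  650),
  percent.mt   = c(   3,    5,   41,   39,    4,   43)
)

# Split on a 20% mitoC cutoff and compare medians per group
high.mito  <- meta$percent.mt >= 20
med_counts <- tapply(meta$nCount_RNA, high.mito, median)
med_genes  <- tapply(meta$nFeature_RNA, high.mito, median)
med_counts  # in this toy data: low-mito cells 9000 UMIs, high-mito cells 1200
```

If the real high-mitoC lump shows the same pattern (few UMIs, few genes), that would support treating those cells as dying/damaged and filtering them.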